
In order to get the highest quality results from large language models, it's essential to use certain methods that are crucial for success. However, these methods can be quite complex for people who aren't prompt engineers. Today, I'll be talking about how to create clear instructions without relying on complex methods, because clear instructions are the most important factor in creating prompts. If the model can't understand what you want, you won't get a quality response.
Expanding the prompt with more details
Imagine we need a blog post. A poor prompt would be "Write a blog about AI." This prompt isn't well-crafted because it lacks clear instructions. The LLM will generate a response based on its own understanding, but in most cases, we won't get what we want. To fix this, we need to provide more details about the kind of blog we want, the topic the model should focus on, the desired length of the response, and so on. Taking all this into account, a better prompt might look like this: "Write a blog about AI that focuses on LLMs. I want to explain how LLMs are changing the world. The blog should be 500 words long, written in a formal yet friendly style, and make use of markdown features to enhance its presentation." This is just an example, and as you can see, these details are small but can significantly improve the response.
Providing a role
Providing a role is a well-known technique to improve response quality. It simply involves assigning a role to the model based on the task. Many people use this method, and it's quite simple. At the beginning of the prompt, write the role and explain its tasks. For example, if we need a prompt to teach us economics, instead of sending "Explain economy to me," we can provide a role by writing "You are now an economics teacher, and your task is to explain economics to me in a beginner-friendly style." Additionally, we can add a final part asking the model to explain as if to a high school student if we want the explanation to be even simpler or to young child.
Extracting a specific part of the text
Sometimes we need to use LLMs to analyze, edit, or correct a piece of text. We often get unrelated responses if we don't separate that text in some way. For example, many people have tried to edit a prompt using another prompt, but the LLM would follow the prompt to be edited instead of the instructions on how to edit it. To solve this problem, we need to extract the second prompt using delimiters. These can be any characters, most commonly quotation marks. After extracting the prompt, people often use the "sandwich structure," where they write both before and after the extracted part that it's the extracted part, to ensure that the LLM won't follow it as instructions. Sandwich structure is also good for other tasks, where we want to make sure that LLM understood something important.
Specify the steps required to complete a task
This is very helpful, especially for difficult and complex tasks. It involves explaining the steps to reach a solution, and in combination with reasoning methods, we can create fantastic prompts. However, even if you don't know how to use reasoning methods, you can use this technique on its own. Simply explain the steps to reach the solution, for example, "Step 1 - Do this, Step 2 - After that do this, Step 3 - etc." This can greatly help the model solve the task more easily and can be used for any kind of task, including generating something new or finding an answer to something.
Provide examples
To get a response that suits our needs, we need to provide not only instructions but also examples. Sometimes models can't understand everything just from instructions, so examples with good quality that directly show what we want can further improve the quality of the response. Examples help the model understand length, writing style, format, and structure, and they're not difficult to create. Simply say "Here is an example: " and put the example in quotation marks. This method is also known as the few-shots method, and if we use it with reasoning methods, we can get Chain of Thoughts, Tree of Thoughts, etc.
When you combine all these methods, you can create fantastic prompts that will give you the highest quality results within your prompt-writing skills. It's not hard, it takes 5 to 15 minutes of writing, and believe me, it can help you a lot in using Large Language Models. I hope you enjoyed this and learned something new. Don't forget to follow for more interesting posts, and if you need someone to work on a project, feel free to contact me.